QUINTA: Reflexive Sensibility For Responsible AI Research and Data-Driven Processes
As the field of artificial intelligence (AI) and machine learning (ML) continues to prioritize fairness and the concerns of historically marginalized communities, the importance of intersectionality in AI research has gained significant recognition. However, few studies provide practical guidance on how researchers can effectively incorporate intersectionality into critical praxis. In response, this paper presents a comprehensive framework grounded in critical reflexivity as intersectional praxis. Operationalizing intersectionality within the AI/DS (Artificial Intelligence/Data Science) pipeline, Quantitative Intersectional Data (QUINTA) is introduced as a methodological paradigm that challenges conventional and superficial research habits, particularly in data-centric processes, to identify and mitigate negative impacts such as the inadvertent marginalization caused by these practices. The framework centers researcher reflexivity to call attention to AI researchers' power in creating and analyzing AI/DS artifacts through data-centric approaches. To illustrate the effectiveness of QUINTA, we provide a reflexive AI/DS researcher demonstration using the #MeToo movement as a case study. Note: This paper was accepted as a poster presentation at the Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) conference in 2023.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > Cuba (0.04)
- Asia > Japan (0.04)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.48)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.87)
- Information Technology > Data Science > Data Mining (0.83)
The Cloud Weaving Model for AI development
Kim, Darcy, Kalender, Aida, Ghebreab, Sennay, Sileno, Giovanni
While analysing challenges in pilot projects developing AI with marginalized communities, we found it difficult to express them within commonly used paradigms. We therefore constructed an alternative conceptual framework to ground AI development in the social fabric -- the Cloud Weaving Model -- inspired (amongst others) by indigenous knowledge, motifs from nature, and Eastern traditions. This paper introduces and elaborates on the fundamental elements of the model (clouds, spiders, threads, spiderwebs, and weather) and their interpretation in an AI context. The framework is then applied to comprehend patterns observed in co-creation pilots approaching marginalized communities, highlighting neglected yet relevant dimensions for responsible AI development.
- North America > United States > New York > New York County > New York City (0.14)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.05)
- Europe > Netherlands > North Holland > Amsterdam (0.05)
- (15 more...)
- Health & Medicine (1.00)
- Government (0.93)
Provocation: Who benefits from "inclusion" in Generative AI?
Dalal, Samantha, Hall, Siobhan Mackenzie, Johnson, Nari
The demand for accurate and representative generative AI systems has increased the demand for participatory evaluation structures. While these participatory structures are paramount to ensuring that non-dominant values, knowledge, and material culture are also reflected in AI models and the media they generate, we argue that dominant structures of community participation in AI development and evaluation are not explicit enough about the benefits and harms that members of socially marginalized groups may experience as a result of their participation. Without explicit interrogation of these benefits by AI developers, we as a community may also remain blind to the immensity of the systemic change that is needed. To support this provocation, we present a speculative case study developed from our own collective experiences as AI researchers. We use this speculative context to itemize the barriers that must be overcome for the proposed benefits to marginalized communities to be realized and the harms mitigated.
- North America > United States > New York > New York County > New York City (0.06)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (9 more...)
- Law (0.68)
- Government (0.46)
A Beloved Writing Organization Appears to Be Destroying Itself for the Dumbest Reason
It was an emotionally dark and stormy night in 2020 when I had the urge to write a novel. I'd been having panic attacks. To work through them, alongside therapy, I decided to write a novel about an isolated mom and a monster in the woods. So that November, I participated in National Novel Writing Month (NaNoWriMo), which is also a nonprofit organization that encourages creative writing through a variety of events, including its most famous and titular program, in which participants attempt to write a complete novel (or 50,000 words) in the month of November. I loved the "flow state" of writing that came from participating.
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
Farnadi, Golnoosh, Havaei, Mohammad, Rostamzadeh, Negar
The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind. In this position paper, we argue that disparities towards marginalized communities - in performance, representation, privacy, robustness, interpretability, and safety - are not isolated concerns but rather interconnected elements of a cascading disparity phenomenon. We contrast foundation models with traditional models and highlight the potential for exacerbated disparity against marginalized communities. Moreover, we emphasize the unique threat of cascading impacts in foundation models, where interconnected disparities can trigger long-lasting negative consequences, specifically for people on the margins. We define marginalized communities within the machine learning context and explore the multifaceted nature of disparities. We analyze the sources of these disparities, tracing them from data creation through training and deployment procedures, to highlight the complex technical and socio-technical landscape. To address this pressing crisis, we conclude with a set of calls to action to mitigate disparity at its source.
Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups
This paper explores the intricate relationship between capitalism, racial injustice, and artificial intelligence (AI), arguing that AI acts as a contemporary vehicle for age-old forms of exploitation. By linking historical patterns of racial and economic oppression with current AI practices, this study illustrates how modern technology perpetuates and deepens societal inequalities. It specifically examines how AI is implicated in the exploitation of marginalized communities through underpaid labor in the gig economy, the perpetuation of biases in algorithmic decision-making, and the reinforcement of systemic barriers that prevent these groups from benefiting equitably from technological advances. Furthermore, the paper discusses the role of AI in extending and intensifying the social, economic, and psychological burdens faced by these communities, highlighting the problematic use of AI in surveillance, law enforcement, and mental health contexts. The analysis concludes with a call for transformative changes in how AI is developed and deployed. Advocating for a reevaluation of the values driving AI innovation, the paper promotes an approach that integrates social justice and equity into the core of technological design and policy. This shift is crucial for ensuring that AI serves as a tool for societal improvement, fostering empowerment and healing rather than deepening existing divides.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > West Sussex (0.04)
- (6 more...)
- Government (1.00)
- Law > Civil Rights & Constitutional Law (0.90)
- Banking & Finance > Economy (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.34)
T-HITL Effectively Addresses Problematic Associations in Image Generation and Maintains Overall Visual Quality
Epstein, Susan, Chen, Li, Vecchiato, Alessandro, Jain, Ankit
Generative AI image models may inadvertently generate problematic representations of people. Past research has noted that millions of users engage daily across the world with these models and that the models, including through problematic representations of people, have the potential to compound and accelerate real-world discrimination and other harms (Bianchi et al., 2023). In this paper, we focus on addressing the generation of problematic associations between demographic groups and semantic concepts that may reflect and reinforce negative narratives embedded in social data. Building on sociological literature (Blumer, 1958) and mapping representations to model behaviors, we have developed a taxonomy to study problematic associations in image generation models. We explore the effectiveness of fine-tuning at the model level as a method to address these associations, identifying a potential reduction in visual quality as a limitation of traditional fine-tuning. We also propose a new methodology with twice-human-in-the-loop (T-HITL) that promises improvements in both reducing problematic associations and maintaining visual quality. We demonstrate the effectiveness of T-HITL by providing evidence of three problematic associations addressed by T-HITL at the model level. Our contributions to scholarship are twofold. First, by defining problematic associations in the context of machine learning models and generative AI, we introduce a conceptual and technical taxonomy for addressing some of these associations. Second, we provide a method, T-HITL, that addresses these associations while simultaneously maintaining the visual quality of image model generations. This mitigation need not be a tradeoff, but rather an enhancement.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Jersey > Middlesex County > New Brunswick (0.04)
- North America > United States > Nebraska (0.04)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Law > Labor & Employment Law (0.94)
- Law Enforcement & Public Safety (0.68)
- (2 more...)
The White House's "AI Bill of Rights" outlines five principles to make artificial intelligence safer, more transparent and less discriminatory
Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this arena, potentially leading to decisions and situations that have drawn criticism. Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products used by organizations like the Los Angeles Police Department, where they have been shown to bolster existing racially biased policies. There are some government recommendations and guidance regarding AI use.
- North America > United States > California > Los Angeles County > Los Angeles (0.55)
- North America > United States > New York (0.06)
Here Is The Future With AI - AI Summary
They believed that global population growth, industrialisation, pollution, food production and resource depletion would inevitably lead to economic collapse during the 21st century. They were concerned not just about environmental issues, but also about economic ones: namely, that increasing population growth would lead to overconsumption and ultimately cause a collapse in resources (the so-called "Malthusian trap").
Artificial Intelligence's Environmental Costs and Promise
Artificial intelligence (AI) is often presented in binary terms in both popular culture and political analysis. Either it represents the key to a futuristic utopia defined by the integration of human intelligence and technological prowess, or it is the first step toward a dystopian rise of machines. This same binary thinking is practiced by academics, entrepreneurs, and even activists in relation to the application of AI in combating climate change. The technology industry's singular focus on AI's role in creating a new technological utopia obscures the ways that AI can exacerbate environmental degradation, often in ways that directly harm marginalized populations. In order to utilize AI in fighting climate change in a way that both embraces its technological promise and acknowledges its heavy energy use, the technology companies leading the AI charge need to explore solutions to the environmental impacts of AI.
- North America > United States (0.31)
- Europe > Iceland (0.07)
- South America > Chile > Atacama Region > Copiapó Province > Copiapó (0.05)
- Asia > India (0.05)
- Information Technology (1.00)
- Energy > Renewable (1.00)
- Energy > Energy Storage (1.00)
- (4 more...)